Are extralinguistic signals such as image pixels crucial for inducing constituency grammars? While past work has shown substantial gains from multimodal cues, we investigate whether such gains persist in the presence of rich information from large language models (LLMs). We find that our approach, LLM-based C-PCFG (LC-PCFG), outperforms previous multimodal methods on the task of unsupervised constituency parsing, achieving state-of-the-art performance on a variety of datasets. Moreover, LC-PCFG reduces the parameter count by over 50% and speeds up training by 1.7x relative to image-aided models and by more than 5x relative to video-aided models. These results challenge the notion that extralinguistic signals such as image pixels are needed for unsupervised grammar induction, and point to the need for stronger text-only baselines when evaluating the benefit of multimodality for the task.
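As a rough illustration of the text-only conditioning idea (not the exact LC-PCFG configuration; the model name and mean-pooling choice below are assumptions), a fixed-size LLM sentence vector of the kind a compound PCFG could be conditioned on in place of image or video features might be extracted like this:

```python
# Minimal sketch: mean-pooled LLM features for a sentence, usable as the
# conditioning vector of a compound PCFG. Model choice and pooling are
# illustrative assumptions, not the exact LC-PCFG setup.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

def sentence_embedding(sentence: str) -> torch.Tensor:
    """Return a single fixed-size vector summarizing the sentence."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # (dim,)

z = sentence_embedding("the cat sat on the mat")
print(z.shape)  # e.g. torch.Size([768]) for gpt2
```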
Vehicle-to-Everything (V2X) communication has been proposed as a potential solution to improve the robustness and safety of autonomous vehicles by improving coordination and removing the barrier of non-line-of-sight sensing. Cooperative Vehicle Safety (CVS) applications depend tightly on the reliability of the underlying data system, which can suffer from loss of information due to inherent issues in its components, such as sensor failures or the poor performance of V2X technologies under dense communication channel load. In particular, information loss affects the target classification module and, subsequently, the safety application performance. To enable reliable and robust CVS systems that mitigate the effect of information loss, we propose a Context-Aware Target Classification (CA-TC) module coupled with a hybrid learning-based predictive modeling technique for CVS systems. The CA-TC consists of two modules: a Context-Aware Map (CAM) and a Hybrid Gaussian Process (HGP) prediction system. Vehicle safety applications then use the information from the CA-TC, making them more robust and reliable. The CAM leverages vehicles' path history, road geometry, tracking, and prediction, while the HGP provides accurate vehicle trajectory predictions to compensate for data loss (due to communication congestion) or inaccurate sensor measurements. Based on offline real-world data, we learn a finite bank of driver models that represent the joint dynamics of the vehicle and the driver's behavior. We combine offline training and online model updates with on-the-fly forecasting to account for new possible driver behaviors. Finally, our framework is validated using simulation and realistic driving scenarios to confirm its potential in enhancing the robustness and reliability of CVS systems.
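To illustrate the gap-filling role of the trajectory predictor, the sketch below fits a plain Gaussian process to a vehicle's recent positions and extrapolates over a short communication outage. The paper's HGP combines a learned bank of driver models; the single GP, kernel, and toy trajectory here are assumptions for illustration only.

```python
# Minimal sketch: a Gaussian process extrapolating a vehicle's (x, y)
# trajectory over a brief outage window. Not the paper's HGP; a single
# plain GP per output is shown purely to illustrate the idea.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.linspace(0.0, 5.0, 50).reshape(-1, 1)           # timestamps (s)
xy = np.column_stack([12.0 * t.ravel(),                 # x: ~12 m/s cruise
                      0.5 * np.sin(t.ravel())])         # y: mild weaving

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.05)
gp = GaussianProcessRegressor(kernel=kernel).fit(t, xy)

t_missing = np.linspace(5.1, 6.0, 10).reshape(-1, 1)    # outage window
xy_pred, xy_std = gp.predict(t_missing, return_std=True)
print(xy_pred.shape)  # (10, 2): predicted (x, y) during the outage
```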
We introduce Structured 3D Features, a model based on a novel implicit 3D representation that pools pixel-aligned image features onto dense 3D points sampled from a parametric, statistical human mesh surface. The 3D points have associated semantics and can move freely in 3D space. This allows for optimal coverage of the person of interest, beyond just the body shape, which in turn helps in modeling accessories, hair, and loose clothing. Building on this, we present a complete 3D transformer-based attention framework which, given a single image of a person in an unconstrained pose, generates an animatable 3D reconstruction with albedo and illumination decomposition, as a result of a single end-to-end model, trained semi-supervised, and with no additional postprocessing. We show that our S3F model surpasses the previous state-of-the-art on various tasks, including monocular 3D reconstruction as well as albedo and shading estimation. Moreover, we show that the proposed methodology allows novel view synthesis, relighting, and re-posing of the reconstruction, and can naturally be extended to handle multiple input images (e.g., different views of a person, or the same view in different poses, in video). Finally, we demonstrate the editing capabilities of our model for 3D virtual try-on applications.
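The core pooling step, projecting 3D surface points into the image and bilinearly sampling a feature map, could be sketched as below. The camera model, tensor shapes, and helper function are simplifying assumptions, not the actual S3F implementation.

```python
# Minimal sketch of pixel-aligned feature pooling: 3D points around a body
# mesh are projected with pinhole intrinsics and sample a 2D feature map.
import torch
import torch.nn.functional as F

def pool_pixel_aligned(features, points, K):
    """features: (1, C, H, W); points: (N, 3) in camera space; K: (3, 3)."""
    uv = (K @ points.T).T                          # (N, 3) homogeneous pixels
    uv = uv[:, :2] / uv[:, 2:3]                    # perspective divide
    H, W = features.shape[-2:]
    grid = torch.stack([uv[:, 0] / (W - 1), uv[:, 1] / (H - 1)], dim=-1)
    grid = grid * 2 - 1                            # normalize to [-1, 1]
    grid = grid.view(1, 1, -1, 2)                  # (1, 1, N, 2)
    sampled = F.grid_sample(features, grid, align_corners=True)
    return sampled.view(features.shape[1], -1).T   # (N, C) per-point features

feats = torch.randn(1, 64, 256, 256)
pts = torch.rand(1000, 3) + torch.tensor([0.0, 0.0, 2.0])  # in front of camera
K = torch.tensor([[200.0, 0.0, 128.0], [0.0, 200.0, 128.0], [0.0, 0.0, 1.0]])
print(pool_pixel_aligned(feats, pts, K).shape)     # torch.Size([1000, 64])
```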
In this paper, we present the Multi-view Extended Videos with Identities (MEVID) dataset for large-scale, video person re-identification (ReID) in the wild. To our knowledge, MEVID represents the most-varied video person ReID dataset, spanning extensive indoor and outdoor environments across nine unique dates in a 73-day window, various camera viewpoints, and entity clothing changes. Specifically, we label the identities of 158 unique people wearing 598 outfits taken from 8,092 tracklets, with an average length of about 590 frames, seen in 33 camera views from the very large-scale MEVA person activities dataset. While other datasets have more unique identities, MEVID emphasizes a richer set of information about each individual, such as: 4 outfits/identity vs. 2 outfits/identity in CCVID, 33 viewpoints across 17 locations vs. 6 in 5 simulated locations for MTA, and 10 million frames vs. 3 million for LS-VID. Being based on the MEVA video dataset, we also inherit data that is intentionally demographically balanced with respect to the continental United States. To accelerate the annotation process, we developed a semi-automatic annotation framework and GUI that combines state-of-the-art real-time models for object detection, pose estimation, person ReID, and multi-object tracking. We evaluate several state-of-the-art methods on MEVID challenge problems and comprehensively quantify their robustness in terms of changes of outfit, scale, and background location. Our quantitative analysis of the realistic, unique aspects of MEVID shows that there are significant remaining challenges in video person ReID and indicates important directions for future research.
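For readers unfamiliar with how video ReID methods are typically scored on tracklet datasets like this, the generic sketch below averages per-frame features into one tracklet descriptor and reports rank-1 accuracy by cosine similarity. It is a common illustration, not the official MEVID evaluation code.

```python
# Generic video-ReID scoring sketch: mean-pool frame features per tracklet,
# rank gallery tracklets by cosine similarity, report rank-1 accuracy.
import numpy as np

def tracklet_descriptor(frame_feats: np.ndarray) -> np.ndarray:
    """frame_feats: (num_frames, dim) -> L2-normalized mean descriptor."""
    d = frame_feats.mean(axis=0)
    return d / (np.linalg.norm(d) + 1e-12)

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    q = np.stack([tracklet_descriptor(f) for f in query_feats])
    g = np.stack([tracklet_descriptor(f) for f in gallery_feats])
    sims = q @ g.T                                   # cosine similarities
    top1 = np.asarray(gallery_ids)[sims.argmax(axis=1)]
    return float((top1 == np.asarray(query_ids)).mean())
```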
Clinical diagnosis of the eye is performed over multiple data modalities, including scalar clinical labels, vectorized biomarkers, two-dimensional fundus images, and three-dimensional optical coherence tomography (OCT) scans. Clinical practitioners use all available data modalities to diagnose and treat eye diseases such as diabetic retinopathy (DR) or diabetic macular edema (DME). Enabling the use of machine learning algorithms within the field of ophthalmic medicine requires research into the relationships and interactions between all relevant data over a treatment period. Existing datasets are limited in that they neither provide the data nor consider explicit modeling of the relationships between data modalities. In this paper, we introduce the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset, which addresses the above limitations. This is the first OCT and near-IR fundus dataset to include clinical labels, biomarker labels, disease labels, and time-series patient treatment information from associated clinical trials. The dataset consists of 1268 near-IR fundus images, each with at least 49 OCT scans and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR or DME. In total, there are 96 eyes' worth of data averaged over a period of at least two years, with each eye treated for an average of 66 weeks and 7 injections. We benchmark the utility of the OLIVES dataset for medical image analysis and provide baselines and concrete research directions for core and emerging machine learning paradigms.
Modern neural networks use building blocks, such as convolutions, that are equivariant to arbitrary 2D translations. However, these vanilla blocks are not equivariant to arbitrary 3D translations in the projective manifold. Even so, all monocular 3D detectors use vanilla blocks to obtain 3D coordinates, a task for which vanilla blocks are not designed. This paper takes the first step towards convolutions equivariant to arbitrary 3D translations in the projective manifold. Since depth is the hardest quantity to estimate for monocular detection, this paper proposes the Depth EquiVarIAnt NeTwork (DEVIANT), built from existing scale-equivariant steerable blocks. As a result, DEVIANT is equivariant to depth translations in the projective manifold, whereas vanilla networks are not. The additional depth equivariance forces DEVIANT to learn consistent depth estimates, and consequently DEVIANT achieves state-of-the-art monocular 3D detection results on the KITTI and Waymo datasets in the image-only category, and performs competitively with methods that use extra information. Moreover, DEVIANT works better than vanilla networks in cross-dataset evaluation. Code and models are available at https://github.com/abhi1kumar/deviant
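The intuition connecting depth translation and image-plane scaling can be checked numerically: under a pinhole camera, moving an object along the optical axis rescales its projection, which is why scale-equivariant blocks can provide depth equivariance. The focal length, depths, and offsets below are illustrative values, not figures from the paper.

```python
# Minimal numeric check: translating an object in depth under a pinhole
# camera scales its image projection, so depth translation in 3D
# corresponds to scaling in the image plane.
import numpy as np

f = 700.0                        # focal length in pixels (illustrative)
X = np.array([1.5, -0.5])        # lateral offsets of two object points (m)
Z = 10.0                         # original depth (m)
t_z = 5.0                        # depth translation (m)

u_before = f * X / Z             # projections at the original depth
u_after = f * X / (Z + t_z)      # projections after the depth translation
scale = Z / (Z + t_z)            # pure image-plane scale factor
print(u_before * scale)          # matches u_after exactly
print(u_after)
```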
Automatic speech recognition (ASR) is a key element of new services that help users interact with automated systems. Deep learning methods have made it possible to deploy ASR systems for English with word error rates below 5%. However, these methods are only usable for languages with hundreds or thousands of hours of audio and corresponding transcriptions. To help so-called low-resource languages speed up the availability of resources that can improve the performance of their ASR systems, methods for creating new resources from existing ones are being investigated. In this paper, we describe our data augmentation approach to improve the results of ASR models for low-resource and agglutinative languages. We carried out experiments developing an ASR for Quechua using the wav2letter++ model. Our approach reduced the WER of our base model by 8.73%. The resulting ASR model achieved a WER of 22.75% and was trained with 99 hours of original resources and 99 hours of synthetic data, combining text augmentation and synthetic speech generation.
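For reference, the word error rate (WER) figures above are word-level edit distances normalized by the reference length. A minimal self-contained sketch of the metric (the example words are merely illustrative Quechua tokens):

```python
# Minimal WER sketch: word-level edit distance divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[-1][-1] / max(len(ref), 1)

print(wer("kay simi rimay", "kay rimay"))  # 0.333...: one word deleted
```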
The Huqariq corpus is a multilingual collection of Peru's native languages. The transcribed corpus is intended for the research and development of speech technologies to preserve Peru's endangered languages. Huqariq is primarily designed for developing automatic speech recognition, language identification, and text-to-speech tools. To sustainably obtain the corpus collection, we employ a crowdsourcing methodology. Huqariq includes four native languages of Peru, and it is expected that by the end of 2022 it can reach up to 20 of Peru's 48 native languages. The corpus has 220 hours of transcribed audio recorded by more than 500 volunteers, making it the largest corpus of native languages of Peru. To verify the quality of the corpus, we present speech recognition experiments using the 220 hours of fully transcribed audio.
We propose a novel optimization-based paradigm for fitting 3D human models to images and scans. In contrast to existing approaches that directly regress the parameters of a low-dimensional statistical body model (e.g., SMPL) from the input image, we train an ensemble of per-vertex neural field networks. The network predicts, in a distributed manner, the vertex descent direction based on neural features extracted at the current vertex projection. At inference time, we employ this network, called LVD, within a gradient-descent optimization pipeline until its convergence, which typically occurs in a fraction of a second even when initializing all vertices into a single point. An exhaustive evaluation demonstrates that our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement compared to state-of-the-art. LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement over SOTA with a much simpler and faster method.
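The iterative fitting loop described above can be sketched as follows: a network predicts a per-vertex displacement from features sampled at each vertex's current location, and vertices are updated until they stop moving. The feature sampler, network architecture, and thresholds here are stand-ins, not the actual LVD model.

```python
# Minimal sketch of iterative vertex descent: predict per-vertex steps from
# sampled features and update until convergence or an iteration cap.
import torch
import torch.nn as nn

class VertexStep(nn.Module):
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim + 3, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, verts, feats):
        # verts: (V, 3), feats: (V, feat_dim) -> predicted displacements (V, 3)
        return self.mlp(torch.cat([verts, feats], dim=-1))

def fit(verts, sample_features, step_net, iters=50, tol=1e-4):
    for _ in range(iters):
        delta = step_net(verts, sample_features(verts))
        verts = verts + delta
        if delta.norm(dim=-1).max() < tol:   # vertices have stopped moving
            break
    return verts

# Toy usage: all vertices of an SMPL-sized mesh start at a single point,
# with a placeholder feature sampler standing in for image features.
step_net = VertexStep()
verts0 = torch.zeros(6890, 3)
sample_features = lambda v: torch.zeros(v.shape[0], 32)
fitted = fit(verts0, sample_features, step_net)
```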
In this paper, we propose a new method to enhance the 3D body pose estimation of a person computed from videos captured with a single wearable camera. The key idea is to leverage high-level features linking first- and third-person views in a joint embedding space. To learn such an embedding space, we introduce First2Third-Pose, a new paired synchronized dataset of nearly 2,000 videos depicting human activities captured from both first- and third-person viewpoints. We explicitly consider spatial- and motion-domain features, while using a semi-Siamese architecture trained in a self-supervised fashion. Experimental results demonstrate that the joint multi-view embedding space learned with our dataset can be used to extract discriminative features from arbitrary single-view egocentric videos, without requiring domain adaptation or knowledge of camera parameters. We achieve a significant improvement in egocentric 3D body pose estimation on two unconstrained datasets over three supervised state-of-the-art approaches. Our dataset and code will be made available for research purposes.
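One common way to align synchronized first- and third-person clips in a joint embedding space is a symmetric contrastive objective over paired clip features, sketched below. The loss form, temperature, and feature dimensions are illustrative assumptions, not the paper's exact semi-Siamese training objective.

```python
# Minimal sketch: symmetric contrastive alignment of paired ego/exo clip
# features; matched pairs sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def joint_embedding_loss(ego_feats, exo_feats, temperature=0.07):
    """ego_feats, exo_feats: (B, D) features of time-synchronized clip pairs."""
    ego = F.normalize(ego_feats, dim=-1)
    exo = F.normalize(exo_feats, dim=-1)
    logits = ego @ exo.T / temperature          # (B, B) pairwise similarities
    targets = torch.arange(ego.shape[0], device=ego.device)
    # pull matched pairs together in both directions
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = joint_embedding_loss(torch.randn(8, 256), torch.randn(8, 256))
```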